Review for NeurIPS paper: Stochastic Recursive Gradient Descent Ascent for Stochastic Nonconvex-Strongly-Concave Minimax Problems
What is the lower bound the paper refers to? - Currently, the way the paper is written does not highlight the novelty of the work. It appears to combine existing methods for solving min-max problems with existing variance reduction modules. Please clarify the novelty of the analysis further (the explanation in Section 5 does not highlight the difficulties and challenges).
Review for NeurIPS paper: Stochastic Recursive Gradient Descent Ascent for Stochastic Nonconvex-Strongly-Concave Minimax Problems
The reviewers all agree that the improved complexity is new for the class of stochastic nonconvex-strongly-concave minimax problems, and the claim of optimal complexity is clarified in the rebuttal. The weakness pointed out by the reviewers is the seeming lack of novelty in combining the existing techniques of SGDA and SARAH gradient estimators, but the authors' rebuttal convincingly argues that this combination requires genuine innovation in the minimax setting. Another weakness is that the SREDA algorithm is rather complex: it has multi-level loops and can be hard to tune in practice. Overall, I recommend acceptance based on the value of the theoretical improvement. The authors should carefully address the remaining concerns of the reviewers in the revision, especially by clarifying the claim of optimal complexity.
Stochastic Recursive Gradient Descent Ascent for Stochastic Nonconvex-Strongly-Concave Minimax Problems
We focus on the stochastic setting, where we can only access an unbiased stochastic gradient estimate of f at each iteration. This formulation includes many machine learning applications as special cases, such as robust optimization and adversarial training. The most popular algorithm for this problem is stochastic gradient descent ascent (SGDA), which requires \mathcal{O}(\kappa^3\varepsilon^{-4}) stochastic gradient evaluations, where \kappa is the condition number. In this paper, we propose a novel method called Stochastic Recursive gradiEnt Descent Ascent (SREDA), which estimates gradients more efficiently using variance reduction. This method achieves the best known stochastic gradient complexity of \mathcal{O}(\kappa^3\varepsilon^{-3}), and its dependency on \varepsilon is optimal for this problem.
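To illustrate the core idea, here is a minimal sketch of a gradient descent ascent loop that replaces plain stochastic gradients with a SARAH-style recursive estimator (periodic large-batch restarts plus the recursion v_t = g(z_t; ξ_t) − g(z_{t−1}; ξ_t) + v_{t−1}). This is not the authors' exact SREDA algorithm: the toy quadratic objective, step sizes, restart period, and batch size are all illustrative assumptions, and the epoch structure of the full method is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(x, y, xi):
    # Stochastic gradient of the toy objective
    #   f(x, y) = 0.5*x**2 + x*y - 0.5*y**2   (strongly concave in y),
    # with additive noise xi simulating the stochastic oracle.
    gx = x + y + xi[0]
    gy = x - y + xi[1]
    return gx, gy

def sreda_sketch(x0=3.0, y0=-2.0, eta_x=0.05, eta_y=0.1,
                 T=2000, q=50, big_batch=64, noise=0.1):
    x, y = x0, y0
    x_prev, y_prev = x, y
    vx = vy = 0.0
    for t in range(T):
        if t % q == 0:
            # Periodic restart: large-batch gradient estimate resets the
            # recursive estimator (plays the role of SREDA's checkpoint step).
            gs = [grad(x, y, noise * rng.standard_normal(2))
                  for _ in range(big_batch)]
            vx = float(np.mean([g[0] for g in gs]))
            vy = float(np.mean([g[1] for g in gs]))
        else:
            # SARAH recursion: the SAME sample xi is evaluated at the current
            # and previous iterates, so correlated noise cancels in the
            # difference and the estimator's variance stays small.
            xi = noise * rng.standard_normal(2)
            gx, gy = grad(x, y, xi)
            gx_prev, gy_prev = grad(x_prev, y_prev, xi)
            vx = gx - gx_prev + vx
            vy = gy - gy_prev + vy
        x_prev, y_prev = x, y
        x -= eta_x * vx   # descent step on the min variable
        y += eta_y * vy   # ascent step on the max variable
    return x, y
```

On this toy problem the saddle point is (0, 0), and the iterates settle near it despite the stochastic oracle, because the recursive estimator keeps the gradient noise from accumulating between restarts.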